
    Smoothed Analysis of Dynamic Networks

    We generalize the technique of smoothed analysis to distributed algorithms in dynamic network models. Whereas standard smoothed analysis studies the impact of small random perturbations of input values on algorithm performance metrics, dynamic graph smoothed analysis studies the impact of random perturbations of the underlying changing network graph topologies. As in the original application of smoothed analysis, our goal is to study whether known strong lower bounds in dynamic network models are robust or fragile: do they withstand small (random) perturbations, or do such deviations push the graphs far enough from a precise pathological instance to enable much better performance? Fragile lower bounds are likely not relevant for real-world deployment, while robust lower bounds represent a true difficulty caused by dynamic behavior. We apply this technique to three standard dynamic network problems with known strong worst-case lower bounds: random walks, flooding, and aggregation. We prove that these bounds exhibit a spectrum of robustness when subjected to smoothing: some are extremely fragile (random walks), some are moderately fragile / robust (flooding), and some are extremely robust (aggregation).
    Comment: 20 pages
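
    The paper's smoothing model perturbs each round's graph topology rather than numeric inputs. As a rough illustration of one such perturbation step, the sketch below toggles up to k uniformly random vertex pairs while keeping the graph connected; the function names and the rejection rule are illustrative assumptions, not the paper's exact sampling procedure (which draws uniformly from allowed graphs near the adversary's choice).

        import random
        from itertools import combinations

        def is_connected(nodes, edges):
            # Depth-first search over an undirected edge set.
            adj = {v: set() for v in nodes}
            for u, v in edges:
                adj[u].add(v)
                adj[v].add(u)
            start = next(iter(nodes))
            seen, stack = {start}, [start]
            while stack:
                for w in adj[stack.pop()] - seen:
                    seen.add(w)
                    stack.append(w)
            return seen == set(nodes)

        def smooth_graph(nodes, edges, k, rng=random):
            # Toggle up to k uniformly random vertex pairs (add the edge if
            # absent, remove it if present), rejecting any toggle that would
            # disconnect the graph; a stand-in for sampling a nearby graph
            # from the allowed (connected) family.
            edges = {frozenset(e) for e in edges}
            pairs = [frozenset(p) for p in combinations(list(nodes), 2)]
            for _ in range(k):
                candidate = edges ^ {rng.choice(pairs)}
                if is_connected(nodes, candidate):
                    edges = candidate
            return [tuple(sorted(e)) for e in edges]

        # One smoothing step applied to a 6-cycle:
        cycle = [(i, (i + 1) % 6) for i in range(6)]
        print(smooth_graph(range(6), cycle, k=2))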

    Small ball probability for the condition number of random matrices

    Let $A$ be an $n\times n$ random matrix with i.i.d. entries of zero mean, unit variance and a bounded subgaussian moment. We show that the condition number $s_{\max}(A)/s_{\min}(A)$ satisfies the small ball probability estimate $\mathbb{P}\{s_{\max}(A)/s_{\min}(A)\leq n/t\}\leq 2\exp(-ct^2)$ for $t\geq 1$, where $c>0$ may only depend on the subgaussian moment. Although the estimate can be obtained as a combination of known results and techniques, it was not noticed in the literature before. As a key step of the proof, we apply estimates for the singular values of $A$, $\mathbb{P}\{s_{n-k+1}(A)\leq ck/\sqrt{n}\}\leq 2\exp(-ck^2)$ for $1\leq k\leq n$, obtained (under some additional assumptions) by Nguyen.
    Comment: Some changes according to the Referee's comment
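
    As a quick numerical sanity check of the bound (not part of the paper), one can Monte-Carlo the left-hand side for Gaussian matrices, one subgaussian case covered by the theorem; the parameters below are arbitrary.

        import numpy as np

        def small_ball_estimate(n=100, t=2.0, trials=1000, seed=0):
            # Empirical estimate of P{ s_max(A)/s_min(A) <= n/t } for A with
            # i.i.d. standard Gaussian entries.
            rng = np.random.default_rng(seed)
            hits = 0
            for _ in range(trials):
                s = np.linalg.svd(rng.standard_normal((n, n)), compute_uv=False)
                hits += s[0] / s[-1] <= n / t
            return hits / trials

        # The theorem bounds this probability by 2*exp(-c*t^2) for t >= 1.
        print(small_ball_estimate())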

    Smoothed Analysis of the Minimum-Mean Cycle Canceling Algorithm and the Network Simplex Algorithm

    The minimum-cost flow (MCF) problem is a fundamental optimization problem with many applications and seems to be well understood. Over the last half century many algorithms have been developed to solve the MCF problem, and these algorithms have varying worst-case bounds on their running time. However, these worst-case bounds are not always a good indication of the algorithms' performance in practice. The Network Simplex (NS) algorithm needs an exponential number of iterations for some instances, but it is considered the best algorithm in practice and performs best in experimental studies. On the other hand, the Minimum-Mean Cycle Canceling (MMCC) algorithm is strongly polynomial, but performs badly in experimental studies. To explain these differences in practical performance we apply the framework of smoothed analysis. We show an upper bound of $O(mn^2\log(n)\log(\phi))$ for the number of iterations of the MMCC algorithm. Here $n$ is the number of nodes, $m$ is the number of edges, and $\phi$ is a parameter limiting the degree to which the edge costs are perturbed. We also show a lower bound of $\Omega(m\log(\phi))$ for the number of iterations of the MMCC algorithm, which can be strengthened to $\Omega(mn)$ when $\phi=\Theta(n^2)$. For the number of iterations of the NS algorithm we show a smoothed lower bound of $\Omega(m \cdot \min\{n, \phi\} \cdot \phi)$.
    Comment: Extended abstract to appear in the proceedings of COCOON 2015
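
    The MMCC algorithm repeatedly cancels a cycle of minimum mean cost in the residual network. A minimal sketch of that subroutine, Karp's classical algorithm for the minimum cycle mean, is shown below; the graph encoding and function name are mine, and a full MMCC implementation would additionally maintain the residual network and push flow around each witnessing negative cycle.

        import math

        def min_cycle_mean(n, edges):
            # Karp's algorithm.  `edges` is a list of (u, v, w) arcs on the
            # vertices 0..n-1; returns the minimum mean weight of a directed
            # cycle, or None if the graph is acyclic.
            INF = math.inf
            # d[k][v] = minimum weight of a walk with exactly k arcs ending
            # at v, starting anywhere (a zero-weight super-source).
            d = [[0.0] * n] + [[INF] * n for _ in range(n)]
            for k in range(1, n + 1):
                for u, v, w in edges:
                    if d[k - 1][u] + w < d[k][v]:
                        d[k][v] = d[k - 1][u] + w
            best = INF
            for v in range(n):
                if d[n][v] < INF:
                    best = min(best, max((d[n][v] - d[k][v]) / (n - k)
                                         for k in range(n)))
            return None if best == INF else best

        # A 3-cycle of total weight 3 next to a 2-cycle of total weight -2:
        arcs = [(0, 1, 1), (1, 2, 1), (2, 0, 1), (1, 3, -3), (3, 1, 1)]
        print(min_cycle_mean(4, arcs))   # -1.0: MMCC would cancel the 2-cycle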

    Optimal Grid Drawings of Complete Multipartite Graphs and an Integer Variant of the Algebraic Connectivity

    How can the vertices of a complete multipartite graph $G$ be drawn on distinct points of a bounded $d$-dimensional integer grid such that the sum of squared distances between the vertices of $G$ is (i) minimized or (ii) maximized? For both problems we provide a characterization of the solutions. For the particular case $d=1$, our solution for (i) also settles the minimum-2-sum problem for complete bipartite graphs; the minimum-2-sum problem was defined by Juvan and Mohar in 1992. Weighted centroidal Voronoi tessellations are the solution for (ii). Such drawings are related to the Laplacian eigenvalues of graphs. This motivates us to study which properties of the algebraic connectivity of graphs carry over to the restricted setting of drawings of graphs with integer coordinates.
    Comment: Appears in the Proceedings of the 26th International Symposium on Graph Drawing and Network Visualization (GD 2018)
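
    For intuition, the 1-D case of both objectives can be brute-forced on tiny instances. The helper below is only an illustrative toy (exhaustive over placements, hence exponential), not the paper's characterization; its name and encoding are assumptions.

        from itertools import permutations

        def extremal_drawing(part_sizes, grid_points, maximize=False):
            # Place the vertices of a complete multipartite graph, given by
            # its part sizes, on distinct 1-D integer grid points so that the
            # sum of squared edge lengths is minimized (or maximized).
            labels = [i for i, s in enumerate(part_sizes) for _ in range(s)]
            n = len(labels)
            def cost(pos):
                return sum((pos[a] - pos[b]) ** 2
                           for a in range(n) for b in range(a + 1, n)
                           if labels[a] != labels[b])  # edges join distinct parts
            opt = max if maximize else min
            return opt(permutations(grid_points, n), key=cost)

        # K_{2,2} on the grid {0, 1, 2, 3}:
        print(extremal_drawing([2, 2], range(4)))          # minimizing placement
        print(extremal_drawing([2, 2], range(4), True))    # maximizing placement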

    Exciton Condensation and Perfect Coulomb Drag

    Coulomb drag is a process whereby the repulsive interactions between electrons in spatially separated conductors enable a current flowing in one of the conductors to induce a voltage drop in the other. If the second conductor is part of a closed circuit, a net current will flow in that circuit. The drag current is typically much smaller than the drive current owing to the heavy screening of the Coulomb interaction. There are, however, rare situations in which strong electronic correlations exist between the two conductors. For example, bilayer two-dimensional electron systems can support an exciton condensate consisting of electrons in one layer tightly bound to holes in the other. One thus expects "perfect" drag: a transport current of electrons driven through one layer is accompanied by an equal current of holes in the other. (The electrical currents are therefore opposite in sign.) Here we demonstrate just this effect, taking care to ensure that the electron-hole pairs dominate the transport and that tunneling of charge between the layers is negligible.
    Comment: 12 pages, 4 figures

    Bounds on the Complexity of Halfspace Intersections when the Bounded Faces have Small Dimension

    We study the combinatorial complexity of D-dimensional polyhedra defined as the intersection of n halfspaces, with the property that the highest dimension of any bounded face is much smaller than D. We show that, if d is the maximum dimension of a bounded face, then the number of vertices of the polyhedron is O(n^d) and the total number of bounded faces of the polyhedron is O(n^{d^2}). For inputs in general position the number of bounded faces is O(n^d). For any fixed d, we show how to compute the set of all vertices, how to determine the maximum dimension of a bounded face of the polyhedron, and how to compute the set of bounded faces in polynomial time, by solving a polynomial number of linear programs.
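
    For contrast with these polynomial-time results, the classical brute-force baseline enumerates all D-subsets of the halfspaces and keeps the feasible solutions of the corresponding linear systems. The sketch below (with an assumed tolerance and dense linear algebra) is that exponential baseline, not the paper's LP-based method.

        import numpy as np
        from itertools import combinations

        def vertices(A, b, tol=1e-9):
            # Vertices of {x : A @ x <= b}: feasible points where D linearly
            # independent constraints hold with equality.  Exponential in D.
            n, D = A.shape
            out = []
            for idx in combinations(range(n), D):
                sub = A[list(idx)]
                if abs(np.linalg.det(sub)) < tol:
                    continue                      # rows not independent
                x = np.linalg.solve(sub, b[list(idx)])
                if np.all(A @ x <= b + tol):      # feasible for every halfspace
                    out.append(x)
            return out

        # The unit square: x <= 1, -x <= 0, y <= 1, -y <= 0.
        A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
        b = np.array([1., 0., 1., 0.])
        print(vertices(A, b))                     # the four corner points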

    A Statistical Performance Analysis of Graph Clustering Algorithms

    Measuring graph clustering quality remains an open problem. Here, we introduce three statistical measures to address it. We empirically explore their behavior under a number of stress-test scenarios and compare it to that of the commonly used modularity and conductance. Our measures are robust, immune to the resolution limit, easy to interpret intuitively, and also have a formal statistical interpretation. Our empirical stress-test results confirm that our measures compare favorably to the established ones. In particular, they are shown to be more responsive to graph structure, less sensitive to sample size, less prone to breakdowns during numerical implementation, and less sensitive to uncertainty in connectivity. These features are especially important in the context of larger data sets or when the data may contain errors in the connectivity patterns.
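
    The paper's three new measures are defined in the full text; for reference, the baseline conductance score it compares against can be computed as below (adjacency-dict encoding and function name are mine).

        def conductance(adj, cluster):
            # Cut edges leaving `cluster`, divided by the smaller of the two
            # sides' total degree (edge volume).  `adj` maps each vertex of an
            # undirected graph to its set of neighbours.
            cluster = set(cluster)
            cut = sum(1 for u in cluster for v in adj[u] if v not in cluster)
            vol_in = sum(len(adj[u]) for u in cluster)
            vol_out = sum(len(adj[u]) for u in adj if u not in cluster)
            denom = min(vol_in, vol_out)
            return cut / denom if denom else 0.0

        # Two triangles joined by one bridge edge; the triangle is a good
        # cluster, so its conductance is low.
        adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
               3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
        print(conductance(adj, {0, 1, 2}))   # 1 cut edge / volume 7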

    On encoding symbol degrees of array BP-XOR codes

    Low-density parity-check (LDPC) codes, LT codes, and digital fountain techniques have received significant attention from both academia and industry in the past few years. By employing the underlying ideas of the efficient Belief Propagation (BP) decoding process (also called the iterative message-passing decoding process) on binary erasure channels (BEC) in LDPC codes, Wang recently introduced the concept of array BP-XOR codes and showed necessary and sufficient conditions for MDS [k+2, k] and [n, 2] array BP-XOR codes. In this paper, we analyze the encoding symbol degree requirements for array BP-XOR codes and present new necessary conditions for such codes. These new necessary conditions are used as a guideline for constructing several array BP-XOR codes, for presenting a complete characterization (necessary and sufficient conditions) of degree-two array BP-XOR codes, and for designing new edge-colored graphs. Furthermore, these new necessary conditions are used to show that the codes by Feng, Deng, Bao, and Shen in IEEE Transactions on Computers are incorrect.
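
    Over an erasure channel, the BP decoding process the paper builds on reduces to a simple peeling procedure: any received XOR equation with exactly one unknown source symbol reveals that symbol, and substitution repeats until everything is known or decoding stalls. A generic sketch of that procedure (not Wang's specific array construction) follows.

        def peel_decode(equations, k):
            # BP / peeling decoding over a binary erasure channel.  Each
            # received symbol is (set_of_source_indices, xor_value); recover
            # all k source symbols or return None if decoding stalls.
            known = {}
            eqs = [(set(idx), val) for idx, val in equations]
            progress = True
            while progress and len(known) < k:
                progress = False
                for idx, val in eqs:
                    unknown = idx - known.keys()
                    if len(unknown) == 1:
                        for i in idx - unknown:
                            val ^= known[i]       # substitute resolved symbols
                        known[unknown.pop()] = val
                        progress = True
            return known if len(known) == k else None

        # Source bits (1, 0, 1) encoded as XORs of subsets of the sources:
        print(peel_decode([({0}, 1), ({0, 1}, 1), ({1, 2}, 1)], 3))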

    Genome-wide association filtering using a highly locus-specific transmission/disequilibrium test

    Multimarker transmission/disequilibrium tests (TDTs) are powerful association and linkage tests used to perform genome-wide filtering in the search for disease susceptibility loci. In contrast to case/control studies, they have a low rate of false positives under population stratification and admixture. However, the region found in association with a disease is usually very long because of linkage disequilibrium (LD). Here, we define a multimarker proportional TDT (mTDT_P) designed to improve locus specificity in complex diseases while retaining good power compared to the most powerful multimarker TDTs. The test is a simple generalization of a multimarker TDT in which haplotype frequencies are used to weight the effect that each haplotype has on the whole measure. Two concepts underlie the features of the metric: the ‘common disease, common variant’ hypothesis and the decrease in LD with chromosomal distance. Because of this decrease, the frequency of haplotypes in strong LD with common disease variants decreases with increasing distance from the disease susceptibility locus. Thus, our haplotype-proportional test has higher locus specificity than common multimarker TDTs, which assume a uniform distribution of haplotype probabilities. Because of the common variant hypothesis, risk haplotypes at a given locus are relatively frequent, and a metric that weights partial results for each haplotype by its frequency is as powerful as the most powerful multimarker TDTs. Simulations and real data sets demonstrate that the test has power comparable to the best tests but remarkably higher locus specificity: the association rate decreases faster with distance from a disease susceptibility or disease protective locus.
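
    The weighting idea can be illustrated with a toy statistic: weight each haplotype's standard TDT term by its estimated frequency, so common haplotypes dominate the measure. The code below is only that illustration, with invented counts and frequencies; the exact mTDT_P definition and its null distribution are given in the paper.

        def weighted_tdt(haplotypes):
            # `haplotypes` maps haplotype -> (frequency, transmitted,
            # untransmitted counts from heterozygous parents); each
            # haplotype's McNemar-style term (b - c)^2 / (b + c) is weighted
            # by the haplotype's frequency.
            stat = 0.0
            for freq, b, c in haplotypes.values():
                if b + c:
                    stat += freq * (b - c) ** 2 / (b + c)
            return stat

        # Invented frequencies and transmission counts for three haplotypes:
        print(weighted_tdt({"h1": (0.55, 40, 20),
                            "h2": (0.35, 15, 18),
                            "h3": (0.10, 2, 1)}))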

    Widespread Translocation from Autosomes to Sex Chromosomes Preserves Genetic Variability in an Endangered Lark

    Species that pass repeatedly through narrow population bottlenecks (<100 individuals) are likely to have lost a large proportion of their genetic variation. Having genotyped 92 Raso larks Alauda razae, a Critically Endangered single-island endemic whose world population in the Cape Verdes over the last 100 years has fluctuated between about 15 and 130 pairs, we found variation at 7 of the 21 microsatellite loci that successfully amplified, the remaining loci being monomorphic. At 6 of these polymorphic loci variation was sex-linked, despite the fact that these microsatellites are not sex-linked in the other passerine birds for which they were developed. Comparative analysis strongly suggests that material from several different autosomes has recently been transferred to the sex chromosomes in larks. Such sex-linkage might plausibly allow some level of heterozygosity to be maintained, even in the face of persistently small population sizes.